[GPU] Periodic Coverity roundup #2840

Open
atkassen wants to merge 5 commits into main from akassen/coverity
Conversation

Contributor

@atkassen atkassen commented Mar 7, 2025

Addresses most low-severity Coverity hits for the GPU and GEMM components.

@atkassen atkassen requested review from a team as code owners March 7, 2025 19:02
@github-actions github-actions bot added the platform:gpu-intel Codeowner: @oneapi-src/onednn-gpu-intel label Mar 7, 2025
@atkassen atkassen self-assigned this Mar 7, 2025
@atkassen atkassen force-pushed the akassen/coverity branch 2 times, most recently from c159499 to 7bd3419 Compare March 8, 2025 02:38
@atkassen atkassen requested review from a team as code owners March 18, 2025 22:34
@@ -133,7 +133,7 @@ class reduce_impl_t {
     auto a_blocks = a.blocks();
     a_blocks.erase(a_blocks.begin());
     a = layout_t(a.type(), a.ndims(), 0, a_blocks);
-    return find_1d_tile(a, b);
+    return find_1d_tile(std::move(a), std::move(b));
Contributor commented:

Looks like the signature of the function can be changed instead:
tensor_t find_1d_tile(const layout_t &a, const layout_t &b) const {
rather than doing moves.

Contributor Author (atkassen) replied:

I can't make that change, as find_1d_tile requires mutable layout_ts.

Contributor replied:

I might be wrong, but it looks like when a gets re-created, a brand-new object would do fine: the return value is a different type and doesn't appear to depend on the arguments' lifetimes (the function copies its arguments anyway).

@atkassen atkassen force-pushed the akassen/coverity branch 6 times, most recently from 7ce3826 to f184df4 Compare March 21, 2025 18:02
@atkassen (Contributor Author) commented:
make test
disable test_device_cpu
